Method of displaying map data
Patent abstract:
The present invention relates to a method for displaying cartographic data, comprising the following steps: capture, using a multimedia electronic device (3) comprising a camera (5), of image data representing a model of a geographic relief; identification of the relief, by comparison with at least one geographic relief reference model; access to additional data; display, on a display device (7) of said device, of an image obtained from said image data with said additional data superimposed.

Publication number: CH712678A2
Application number: CH00873/17
Filing date: 2017-07-05
Publication date: 2018-01-15
Inventors: Marguet Eric; Pestalozzi Patrick
Applicant: Spitz & Tal Sa
Main IPC class:
Patent description:
Description

Technical Field

[0001] The present invention relates to the field of cartographic models, as well as to a computer program for providing additional information to the observers of such a model.

State of the art

[0002] Models of cartographic reliefs are known in the state of the art which represent, for example, a country, a mountain range, a city, etc. in miniature and in three dimensions. By cartographic relief is meant a relief corresponding to a large geographic area, such as a continent, a country, a city, a district, a region, a village, a park, a mountain or a valley, covering for example an area of at least 5 square kilometers.

[0003] Such models are often made of wood or plastic and adorn, for example, hotel halls, chalets, tourist office receptions, etc. They are notably used by travelers to plan or recount an excursion or a hike, giving a much better grasp of the terrain than a simple paper or electronic map. The manufacturing techniques used for such relief models, however, limit their resolution as well as the amount of information that can appear on them. Wooden or plastic models of Switzerland or of the Alpine arc are known, for example, in which the minimum resolution corresponds to 1 kilometer. These models allow the main mountains to be observed, but not an excursion to be planned in detail. MA NAN ET AL: "Virtual conservation and interaction with our cultural heritage: Framework for multi-dimension model based interface", 2013 Digital Heritage International Congress (DigitalHeritage), IEEE, vol. 1, 28 October 2013 (2013-10-28), pages 323-330, describes museographic models of a city, as well as a process for converting them into virtual three-dimensional models on which additional information can be added. This process makes it possible to explore a virtual model of the city even at a distance from the physical model.
The virtual reality process makes it possible to reuse the work of model builders to generate a virtual environment more quickly, in which one can navigate even away from the model. It does not, however, provide additional information to a visitor observing the physical model. It is therefore virtual reality (VR), but not augmented reality. The 3D model obtained from the physical model can be visited, but it is apparently never aligned with an image of the physical model. Navigating in a fully virtual environment is always less intuitive than observing a model: many users get lost or lose their sense of direction in a three-dimensional simulation, while models are easier to grasp. This comes in particular from the limited resolution of the screens of smartphones and even computers, which cannot display as much information as a model, and therefore require scrolling or constant zooming to go from an overview to a detailed view. For example, an ultra-high-resolution computer screen can display approximately 8 million pixels, while a cartographic model can be machined from a cloud of 100 million, or even a billion, points in three dimensions. Projections of additional information directly onto the model are also described in this document, but this is still not augmented reality. The content of such models is, moreover, static and cannot be personalized according to the information of interest to the user. An object of the present invention is to provide a method for increasing the usefulness of such models, and to provide additional information to the users admiring them.
Description of the invention

According to one aspect of the invention, this object is achieved by a method of displaying cartographic data, comprising the following steps:
- capture, using a multimedia electronic device comprising a camera, of video image data representing a model of a geographic relief;
- identification of the relief, by comparison with at least one geographic relief reference model, so as to determine whether the captured image corresponds to a known model, and to what portion of this model it corresponds;
- identification of features in said image data;
- access to additional data relating to the identified relief;
- alignment of the captured image data with the additional data, based on said features;
- display, on a display device of said device, of an image obtained by superimposing said additional data in real time on said image data.

[0011] According to one aspect, the invention therefore relates to a method of applying augmented reality to the world of models of geographic reliefs. Augmented reality is of course known as such. In general, it relates to the superimposition of additional information, for example text, images, videos, textures, etc., on a video image stream. One speaks of augmented reality in real time when the delay introduced by the process of calculating the information to be superimposed is small enough to be imperceptible, so that the user has the impression of seeing a modified reality on the screen of his electronic device. Augmented reality therefore differs from virtual reality, in which the user moves in an entirely virtual environment, generally modeled in advance. Augmented reality has already been used in the world of cartography. As an example, document US 2011/199,479 describes an augmented reality mapping system and method in which a user films and watches a real scene by means of a camera and a display screen on a portable handheld device.
The geographic position of the user is determined by GPS, and the mapping data is superimposed on the moving image of the scene captured by the camera, so as to provide directions or the like to the user and to highlight points of interest. Augmented reality is also known for cartographic applications. For example, the UCLive augmented reality map application (see https://ucliveproject.wordpress.com/, archived at https://web.archive.org/web/20140814030527/http://ucliveproject.wordpress.com/) works as follows. A user takes an image of a standard campus map, and a pre-generated 3D model of the map, including other layers of information, is displayed on the screen of the handheld device. The cropping of the 3D model and the angle of view correspond to the image captured by the camera, so that the user can explore the 2D map in 3D using the handheld device. However, these applications relate to enhancing 2D information with 3D modeling, or to enhancing the real world with map or direction information. As such, they are limited in the level of detail they can provide. The application of augmented reality to models of geographic relief has never been suggested. One of the reasons probably comes from the low resolution and precision of conventional models. These defects make it extremely difficult to align additional data with the image of the model. The resolution of paper maps and other available sources of additional data is indeed typically several orders of magnitude greater than that of the best models. It is therefore difficult or even impossible to correctly superimpose useful additional data on the image of a model, making any attempt at augmented reality in this area pointless. Recently, advances both in the modeling of geographic reliefs and in model-making techniques have opened the hope for models of geographic relief with a resolution and a precision considered impossible not long ago.
Such high-resolution models made it possible for the first time, within the framework of the invention, to imagine extremely useful augmented reality applications, as will be seen below. It is for example possible to make a model from a substrate (which can be the basis of the model or a mold for the model) by a process incorporating a machining step in which the movements of a cutting tool having a tip are expressed as a trajectory of the tool along the perpendicular axes X, Y and Z in a grid in an XY plane, calculated on the basis of a point cloud generated by the measurement of the relief, said point cloud comprising a plurality of points which are normalized into a normalized point grid through which the tip of the cutting tool must pass, said trajectory of the tool being generated by sequentially following said normalized grid of points, that is to say by following the grid lines one by one in the same direction or in alternate directions. Therefore, a highly detailed rendering of the model can be virtually augmented by providing information via the multimedia electronic device. Due to the high-precision machining process of the model or its mold, the characteristic elements (i.e. the areas of interest, more commonly known by the English term features), for example the edges in the relief, are reproduced with extreme precision, which is often not the case with conventional calculation and machining techniques. As it is the detection of these features which is used as a reference for the identification of the relief reproduced by the model, then for the alignment of the image of this model with the additional data to be superimposed, the precision of the identification and of the overlay is greatly improved. This opens the door to entirely new applications, and to the overlay of additional data that it would not have been possible to superimpose precisely on a conventional model.
In addition, the additional data does not need to be marked directly on the model, and a larger quantity and a greater variety of information can be provided compared to prior art methods. In addition, as the model is in three dimensions, the user experience is improved compared to "flat" augmented reality, which enhances for example a flat map with virtual information in 3D: in the present method, a 3D structure is augmented with virtual information (whether in 2D or 3D), and the experience is therefore much more tactile and immediate for the user. In variants, the model can represent an animal or a plant, or another natural object that no model can reproduce with a resolution equivalent to the original (as opposed to an industrial object, for example). Advantageously, the step of identifying the model is carried out by at least one of the following steps:
- taking an image and identifying a code provided on or adjacent to the model;
- analysis of the topographic characteristics of the model;
- communication with a wireless identification device provided in or adjacent to the model, such as an RFID chip, a Bluetooth or WiFi transponder or similar;
- determination of the geographic location of the multimedia electronic device and comparison of this geographic position with a database comprising a register of the geographic locations of said models.
Advantageously, the method further comprises a step consisting in determining whether the model is authorized, by means of at least one of the following steps:
- analysis of benchmarks provided on or adjacent to the model;
- communication with a wireless identification device provided in or adjacent to the model;
- detection and analysis of an intentional error provided on the model.
This step can be integrated with, or separate from, the model identification step, and allows the detection of "false" models.
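As an illustration of the last identification option listed above (comparing the device's geographic position with a register of model locations), a minimal sketch follows. The register contents, the distance threshold and all function names are illustrative assumptions, not taken from the patent:

```python
import math

# Hypothetical register of installed models: name -> (latitude, longitude).
# Names and coordinates are invented for illustration.
MODEL_REGISTER = {
    "swiss-relief-lobby": (46.519, 6.633),
    "matterhorn-museum": (46.020, 7.749),
}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def identify_model_by_location(device_lat, device_lon, max_distance_m=50.0):
    """Return the registered model closest to the device, if within range."""
    best, best_d = None, float("inf")
    for name, (lat, lon) in MODEL_REGISTER.items():
        d = haversine_m(device_lat, device_lon, lat, lon)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_distance_m else None
```

In practice this lookup would be one fallback among the listed options, used for example when image-based recognition is ambiguous.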
Advantageously, the normalization of said point cloud is carried out by scaling said point cloud to predetermined dimensions, namely the dimensions of the substrate to be machined, or to a part of them. In the case where the point cloud is not already in the form of a grid, the normalization of said point cloud also comprises a step of transforming said point cloud into a uniformly spaced grid having a desired resolution, by modifying the X and Y coordinates of each point in order to conform to said grid. Advantageously, the normalization of said point cloud can also include filling in the missing points on said grid and/or eliminating excess points. Advantageously, the edges are identified by the analysis of the shadows present in said image, which can be accentuated by illuminating the model laterally with an artificial light source. Advantageously, the additional data extracted from the database and projected onto the image of the model represent at least one of the following elements:
- two-dimensional cartographic information;
- three-dimensional cartographic information;
- meteorological information;
- routes;
- annotations;
- points of interest;
- animations relating to the object;
- characteristics of the object;
- direct links to websites.
These data can be in two or three dimensions. They can be available as two- or three-dimensional images. They can be projected onto a plane perpendicular to the direction of shooting, or onto a surface corresponding to the captured relief portion. It is possible to take occlusions into account. The invention also relates to a product comprising a computer-readable medium, and computer-executable instructions on the computer-readable medium for causing a multimedia electronic device comprising a processor, a camera and a display device to implement the above method.
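The normalization described above (scaling the cloud to the substrate, snapping each point's X and Y onto a uniform grid, and merging excess points landing on the same intersection) can be sketched as follows. This is a minimal illustration under the stated steps; the function name and parameters are assumptions, not the patent's implementation:

```python
def normalize_point_cloud(points, grid_step, substrate_w, substrate_h):
    """Scale (x, y, z) points to substrate dimensions and snap them onto a
    uniform grid, merging excess points by averaging their heights."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    sx = substrate_w / (max(xs) - x0)   # scale factors to the substrate
    sy = substrate_h / (max(ys) - y0)
    grid = {}
    for x, y, z in points:
        # Snap the scaled X/Y coordinates to the nearest grid intersection.
        gx = round((x - x0) * sx / grid_step) * grid_step
        gy = round((y - y0) * sy / grid_step) * grid_step
        grid.setdefault((gx, gy), []).append(z)
    # Merge excess points on the same intersection by averaging Z; unoccupied
    # intersections are simply left empty here (filling by interpolation is
    # the other option mentioned in the text).
    return {cell: sum(zs) / len(zs) for cell, zs in grid.items()}
```

For instance, five raw points falling on a 2 x 2 grid would yield four grid cells, the duplicated cell holding the average of its two heights.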
Brief description of the drawings

[0030] Other details of the invention will appear more clearly in the description which follows, with reference to the appended figures, which show:

Fig. 1: a system according to the invention;
Fig. 2: an example of use of the system from a user's point of view;
Fig. 3: another example of use of the system from a user's point of view;
Fig. 4: a flowchart illustrating the method of the invention;
Fig. 5a and 5b: two examples of cartographic information to be projected onto a view of a geographic relief model;
Fig. 6: a flowchart illustrating a process for manufacturing a model intended to be used in the invention;
Fig. 7: a representation of a set of point cloud data used as the basis for the manufacture of a model;
Fig. 8a and 8b: representations of the limitations on the diameter of the tip of the machining tool;
Fig. 9a and 9b: two-dimensional illustrations of a finishing tool and of the corresponding finishing tool trajectory;
Fig. 10a: a view of a substrate, a roughing tool and the corresponding roughing tool trajectory;
Fig. 10b: a view of a substrate after roughing;
Fig. 10c: a three-dimensional view of a portion of a substrate after finishing machining, with a trajectory of a finishing tool;
Fig. 11: a view of a model fitted with anti-counterfeiting measures;
Fig. 12a-d: a schematic representation of the operation of a direct post-processor for generating a tool trajectory from a point cloud;
Fig. 13: a schematic representation illustrating the method of processing a point cloud before a machining step according to the invention;
Fig. 14: an embodiment of a system according to the invention.

Embodiment of the invention

[0031] FIG. 1 illustrates a system 1 according to the invention. The system 1 comprises a multimedia electronic device 3, for example a "smartphone", a tablet or the like, provided with a camera 5 (illustrated diagrammatically in FIG.
1 by dotted lines representing its field of vision) on a first side of it, and a display screen 7 on a second side thereof. As is generally known, such a multimedia electronic device 3 further comprises a processor and a computer-readable storage medium. These last two elements are contained, as is conventional, in the device 3 and are therefore not illustrated. The computer-readable storage medium contains, stored thereon, a computer program product (colloquially known as an "application"), which includes computer-executable instructions for carrying out the method described below. In a variant, part or all of this method is implemented by a remote server accessible by the electronic device 3. The multimedia electronic device 3 would typically be provided by the user himself, but this need not necessarily be the case. The system 1 further comprises a model 9 ("relief rendering") representing a geographic relief characterized by a point cloud. In the case of fig. 1, the subject of the model 9 is the topography of Switzerland, and the model is therefore a relief map. The model 9 is fabricated by machining, using a high-resolution numerically controlled (CNC) machine tool, from a plastic, metal, wood or similar substrate. A particularly suitable method of such CNC machining with sufficient resolution is described in more detail below. Alternatively, CNC machining of an injection mold or a die-casting mold is also possible, followed by injection molding, casting, or blow molding of plastic or of light materials (aluminum, ceramic, etc.) or heavy materials (concrete, etc.) to produce the model 9, the substrate 45 then being the base for the mold in this case. As mentioned above, in fig. 1, the model 9 represents the topography of Switzerland. It can obviously represent any different geographic region on an appropriate scale (a country, a region, a state, a municipality, a street, or even a park).
The model 9 can be left colorless, or can be painted, coated, or already bear information on it. Typically, however, the model 9 should not have data directly on it, so as to encourage use of the entire system. It may be advantageous, although not essential, to install the model 9 in an enclosure 11 with controlled lighting, such as an artificial light source, for example a lamp 13 for lighting the model 9 from one side (for example from above in this figure), so as to cast shadows and therefore improve the contrast of the image received by the camera 5. Such increased contrast is useful to simplify the processing required when handling the digital image of the model 9 so as to detect the cropping, position, angle, orientation and so on. The system 1 also includes a database 15, which can either be stored locally in the electronic device 3, or remotely and accessible via a direct or indirect wireless connection such as WLAN, WiFi, Bluetooth, 3G, 4G or similar. The database 15 includes additional data relating to the portion of the model 9 of which an image has just been captured, such as surface textures, cartographic information (for example place names, contour lines, altitudes), points of interest, meteorological information, educational information, or routes, for example hiking, biking or cycling routes, recommended or previously traveled by the user and recorded via a satellite geolocation receiver, for example a GPS receiver. The additional data may include annotations, for example texts, which can be placed more or less freely on the captured image in relation to an element of this image. They can include textures (for example portions of geographic maps) or images (for example satellite images, photos, computer-generated images, meteorological images, etc.) which must be superimposed precisely on the captured image, by projecting them onto the terrain. The data may include still images, for example photographs or graphics.
They may include moving images. They can include sounds. The additional data may include two-dimensional images, for example maps, satellite images, topographical surveys, videos, etc., intended to be projected onto the two-dimensional image of the model captured with the camera. They can also include three-dimensional images, for example a synthetic model or an image captured with a three-dimensional camera or a stereo or distance-measuring camera. The additional data may include generic data which are identical for all the users of the system having captured the same image, and/or personal data, for example data corresponding to a user's journey. The user can select the type of additional data he wishes to display if several different types are provided. The database 15 can include information relating to several models 9, one of which is selected automatically, for example by recognizing a relief in the image, or using a bar code or QR code in the image, or by manual selection of a model 9. This point is discussed in more detail below in the context of the method of the invention.

[0042] FIG. 2 illustrates the system according to the invention seen from the point of view of a user. In the present case, the user visualizes on the display device 7 of his device an image obtained from the captured image data, on which at least part of the additional data is superimposed. In this example, the image data corresponds to a shot of a relief model of Switzerland, while the additional data includes cartographic information which is combined with the image of this relief, and correctly aligned relative to this relief. The image data can for example be the image directly taken by the image sensor and displayed on the screen of the device. Preprocessing is also possible, for example to correct brightness, white balance or contrast, or to reduce shadows and reflections.
It is also possible to reconstruct an image different from those taken by the camera, for example by combining several successive frames to generate a panorama, or to generate an image taken from a virtual point of view different from the real point of view. FIG. 3 illustrates the display of a model 9 of part of a three-dimensional model of a plant leaf with the multimedia electronic device 3. To view the small details, an optical adapter 17 can be provided on the camera 5 of the electronic device 3 so as to increase the magnification and thus achieve sufficient zoom and sufficient resolution for the operation of the method of the invention in the case where the model and the corresponding additional data are of very high resolution.

[0045] FIG. 4 illustrates the method of the invention in its simplest form, in the form of a diagram. In step 21, the elements of the system 1 as described above are provided. Typically, the model 9 would be supplied in a public place or in a public or private building, in the form of a three-dimensional model made of wood or another material. The electronic device 3 can be provided by the user himself, although this need not be the case. The computer program product can be pre-loaded on the electronic device 3, supplied on a portable data carrier, or downloaded from a wireless network (Internet, local Wi-Fi, Bluetooth or similar). To this end, the model 9 or its surroundings can indicate how to download the computer program product, for example by scanning a QR code (Quick Response code) or a bar code, by connecting with a wireless identification device 53 (see fig. 11) such as, but not limited to, an RFID (Radio Frequency Identification) tag or a WiFi or Bluetooth modem, or by visiting a specified Internet address. In step 22, the user takes an image of the model 9, which would typically be a video image (that is to say a certain number of frames per second).
The device then generates image data corresponding to the captured portion of the model. This image data can undergo optional preprocessing, for example to change the brightness, contrast, white balance or point of view. In step 23, the model 9 is identified. This can be done by various methods. The computer program product can contain an algorithm which identifies the model 9 directly from the captured image, by comparing it with one or more reference models in order to determine whether the captured image corresponds to a known model, and to what portion of this model it corresponds. This identification can implement a classifier, for example a classifier based on a neural network or the like. The reference model can be a two-dimensional model or, advantageously, a three-dimensional model of the relief. This identification can in particular be based on areas of interest (characteristic elements, called "features" in image recognition) of the captured image, for example characteristic points of the captured image and of the reference model. In an advantageous embodiment, the image recognition software uses edges of elements (edges, corners, depressions and so on), in particular those accentuated by shadows. In the case where the database 15 includes information relating to a single model 9, this model is identified insofar as the computer program product determines that the image received is indeed that of the model 9 and not that of a wall, a floor, or another object. In addition, the exact part of the imaged model 9 is identified, together with the point of view. The latter can be determined solely on the basis of the image or of successive frames captured by the camera 5; however, it can also use information relating to the orientation in space of the electronic device 3, determined by appropriate sensors incorporated in the electronic device 3, for example a compass, an accelerometer, a gyroscope, an inclinometer, etc.
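The edge-based features described above (transitions accentuated by the lateral lighting of the model) can be illustrated with a minimal gradient-magnitude detector. This is a toy stand-in for the recognition software, not the patent's algorithm; a real application would use a library feature detector or a trained classifier:

```python
def edge_features(image, threshold):
    """Return (row, col) positions where the local gradient magnitude of a
    grayscale image (list of lists of intensities) exceeds a threshold.
    Such high-contrast positions play the role of the edge 'features' used
    to identify the model and the captured portion of it."""
    h, w = len(image), len(image[0])
    feats = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            # Central-difference gradients; shadows cast by lateral lighting
            # make these intensity transitions stronger and easier to detect.
            gx = image[r][c + 1] - image[r][c - 1]
            gy = image[r + 1][c] - image[r - 1][c]
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                feats.append((r, c))
    return feats
```

On an image containing a sharp shadow boundary, the returned positions cluster along that boundary, which is exactly what makes the lateral lighting of the enclosure 11 useful.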
In the case where the database includes information relating to multiple models 9, the algorithm can identify which model 9 is captured by the camera 5, as well as the exact part of the imaged model 9 and the point of view, as described above. The model 9 can also be identified by taking an image and identifying a logo, one or more words, numbers or symbols, a barcode or a QR code, or by connecting with a Bluetooth or WiFi transponder. In addition, this identification can be carried out by comparison of the GPS position of the electronic device 3, as determined by a GPS receiver incorporated therein, with a table of known locations of the models 9 integrated in the database 15. In this case, step 23 can be carried out before step 22, as indicated by the double arrow in FIG. 4. The captured model 9 can also be identified by manual selection by the user, for example from a list. The identification of the model 9 may also include a step of determining the authenticity of the relief, for example by checking whether the means of identification mentioned above are contained in a database of approved models. Fig. 11 shows other possibilities, namely:
- provision of one or more deliberate errors 50 (for example errors of scale in a part of the model, or errors of fact, such as a lake, a mountain or another geographical element added or removed) and their detection;
- incorporation of codes, website addresses 51 or similar around the model 9;
- innocuous aesthetic features of the surroundings of the model 9, such as the bars 52;
- a wireless identification device 53 such as an RFID chip, a WiFi or Bluetooth transponder or the like.
In the case where the model 9 is determined to be a forgery on the basis of one of the measures mentioned above, the computer program product can stop the process and provide an indication to the user that the model 9 is not authentic.
In step 24, the database 15 is consulted so as to extract additional data relating to the model 9 and/or to the part of it being captured by the camera 5. This information is relevant to the particular model 9. In the case of a geographic relief model, the additional data may be two-dimensional or three-dimensional cartographic images, and may include annotations of points of interest, geological information, historical information, weather reports (e.g. a cloud or weather overlay), day-night rendering, an overlay of planned or traveled routes, or the like. In all cases, the additional data extracted from the database 15 are complementary to any information provided directly on the model 9. In addition, audio and video information can also be provided. In step 25, this additional data is projected onto the video image taken by the camera 5 so as to create a combined video image integrating both the image data taken by the camera 5, fixed or animated, raw or preprocessed, and the additional data, fixed or animated, extracted from the database 15. This calculation is performed in real time, that is to say without perceptible delay for the user, in order to render to him in real time, on the screen of his device, a video image modified and enriched compared to the captured image. The user thus sees a modified image of the model, or of a portion of the model, which he has before his eyes. He can change his point of view, or zoom in on a detail, simply by moving his device relative to the model. The projection may include a step of geometric transformation of the additional data into a plane perpendicular to the direction of shooting. For example, in the case of projections of a geographic map or other cartographic data onto the image of a topographic model, the projection may include a step of calculating the image of this map from the point of view of the camera, in order to superimpose images taken from the same point of view.
In the case of an image corresponding to a model comprising a significant relief, for example a mountain relief, and/or of reference data corresponding to a three-dimensional model, the projection may include a step of calculating the color value of several points in three-dimensional space, from the image data captured in one or more frames and from the additional data in 2 or 3 dimensions. A two-dimensional image of this space from the point of view of the camera is then calculated by projection, then rendered. The projection includes an alignment of the captured image data with the additional data. This alignment can be based on the previously identified features, in particular on the (sharp) edges of the relief (ridges, depressions, corners and so on, which are particularly accentuated by shadows), and on a correspondence of these features with reference features in the reference information in the database 15. As shown in fig. 5a, the superposition can take place in two dimensions, that is to say by projecting "flat" two-dimensional information 19 onto the image of the three-dimensional model. Alternatively, and as shown in fig. 5b, the additional data 19, in two or three dimensions, can be "wrapped" onto the image of the model 9 in three dimensions so that the superimposed data follow the profile of the relief, taking into account the occlusions, the point of view, shadows, etc. The latter naturally requires much more computing power, but it offers many more possibilities for providing extremely detailed information, in particular in cartographic applications. In particular, 3D information allows the detection and representation of more vertices, more shadows and so on, and for example makes it possible to visualize or annotate a feature hidden by portions of the model 9, such as a road passing behind a mountain in the view of the model 9. By moving the point of view, certain data can also be made invisible when the corresponding elements pass out of view.
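The projection step described above, computing the 2D image position of 3D additional data from the camera's point of view, can be sketched with a simple pinhole model. This is a minimal illustration: focal length and principal-point parameters are generic camera assumptions, and a full renderer would also handle the model's pose and occlusions by the relief:

```python
def project_points(points_3d, focal, cx, cy):
    """Project 3D points (camera coordinates, Z pointing forward) onto the
    image plane with a pinhole model: u = cx + f*x/z, v = cy + f*y/z."""
    out = []
    for x, y, z in points_3d:
        if z <= 0:
            # Behind the camera: the annotation is out of view and can be
            # hidden, as described for elements that pass out of view.
            out.append(None)
            continue
        out.append((cx + focal * x / z, cy + focal * y / z))
    return out
```

An annotation anchored on the relief is thus re-projected for every frame from the current point of view, which is what keeps the superimposed data aligned while the user moves the device.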
In addition, the superimposed additional data may be different depending on the point of view of the multimedia electronic device 3. In step 26, the combined image generated in step 25 is displayed on the screen 7 of the electronic device 3 so that the user can view it. So that the electronic device can discern sufficient details of the model 9 to correctly analyze the position, the angle and the cropping of the image of the model 9, and so that the method can be correctly carried out, the model 9 (or its mold in the case of a molded model 9) must be machined to a sufficient degree of detail, as will be described in the following with reference to figs. 6-10 and 12a-d. In particular, the manner of machining (and in particular of calculating the trajectories of the tool) ensures an exact and precise representation of the edges which are used for recognizing the angle and the orientation of the model 9 as imaged by the electronic device 3. The subject of the model 9 (that is to say the topography of the illustrated geographical region) is modeled or sampled by a cloud of points 40 representing points on the surface of the object. These point clouds are not always provided in a normalized grid, and are conventionally first converted into a network of geometrically defined surfaces using cubic spline functions or any other suitable method. From this network of surfaces, tool movement instructions are generated, taking into account the shape of the tool tip. However, these approaches can lead to artifacts such as unnatural transitions between arbitrarily derived surfaces, and joining small partial renders together also leads to unacceptable artifacts at the junctions between these partial renders. Furthermore, such approaches become increasingly difficult to calculate as the desired degree of detail increases, requiring exponentially more computing power.
The process described below eliminates these artifacts and scales without limit, the only constraints being available machine time and tool wear. Such a point cloud 40 is provided in step 30; an example is shown in fig. 7, which represents a point cloud of a topographic map of Switzerland at a given resolution. Each point 40a of the point cloud carries spatial information measured along three perpendicular axes X, Y and Z, X and Y being the axes in the horizontal plane and Z the vertical axis. In the case of topographic maps, these are generally expressed as longitude (X), latitude (Y) and altitude (Z) respectively. The points can be arranged in two dimensions in the XY plane by sorting first by ascending X value, then by ascending or descending Y value (or vice versa). In steps 31 and 32, the point cloud 40 is converted directly into movement instructions along the same axes X, Y and Z, X and Y again being in the horizontal plane and Z being the vertical axis. These movement instructions are simply the coordinates of the points in space through which the tip of the finishing tool 42 must pass. They are calculated by a so-called direct post-processor, a piece of software executed on a computer. The operation of this direct post-processor is illustrated in figs. 12a-12d. It first takes a point cloud 40, as illustrated in fig. 12a, which may be assembled from a plurality of smaller files each representing a part of the subject of the model 9, then scales the data to the required scale depending on the size of the model 9 to be produced, taking into account the requirements of the machining technology (the correspondence between the grid and the capabilities of the tool used), and with reference to an appropriate reference point for the CNC machine tool. Any conversion of units (e.g. 
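The scaling and XY sorting just described can be sketched as follows. The function name, the scale value and the machine origin are illustrative assumptions, not details from the patent.

```python
def prepare_cloud(points, scale, origin=(0.0, 0.0, 0.0)):
    """Scale a raw (x, y, z) cloud to model size, shift it to a machine
    reference point, and arrange it in the XY plane by ascending X then
    ascending Y, as the text describes."""
    scaled = [(x * scale - origin[0],
               y * scale - origin[1],
               z * scale - origin[2]) for x, y, z in points]
    return sorted(scaled, key=lambda p: (p[0], p[1]))

cloud = [(2.0, 1.0, 5.0), (1.0, 2.0, 4.0), (1.0, 1.0, 3.0)]
print(prepare_cloud(cloud, scale=10.0))
# -> [(10.0, 10.0, 30.0), (10.0, 20.0, 40.0), (20.0, 10.0, 50.0)]
```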
longitude/latitude to millimeters or micrometers) can also take place at this stage. This is shown in fig. 12b, which shows the individual data points 40a superimposed on a grid 40b of the desired resolution in the XY plane. In step 31, the data points 40a are normalized with respect to the grid 40b by modifying the X and Y values of each data point 40a so as to coincide with an intersection of the grid 40b. Any excess points, such as intermediate points for which all neighboring grid intersections are already occupied, can be deleted (as indicated by an "x"), or can be merged with another point or points at a grid intersection, for example by averaging them. In addition, any unoccupied grid intersections can either be filled, for example by interpolation of neighboring points, or simply left empty. The resulting normalized grid points 40c are shown in fig. 12c, with filler data 40d represented by empty circles rather than black dots. In step 32, by reading each line of the normalized grid points 40c in a back-and-forth scan as illustrated in fig. 12d - a series of machining passes, first in one direction and then in the opposite direction for the next line - a tool path 43 can be read directly from the normalized grid points 40c without further calculation. Alternatively, a raster scan pattern can be generated by reading the data from the grid with each line read in the same direction, even if this is less efficient in terms of machine time because of the time required to return the tool 42 to the start of the next cut. In the case of missing data points, if filling has not been carried out, the instructions simply advance to the next occupied grid intersection in the sequence. A header block 43a can also be added to initialize the CNC machine tool, and a footer block 43b can be added to complete the machining, as required. 
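Steps 31 and 32 - snapping points to grid intersections, merging duplicates, and reading the grid in a back-and-forth scan - can be sketched as follows. This is an illustrative sketch: the function names are assumptions, and merging by averaging Z is just one of the options the text mentions (deletion or interpolation filling are others).

```python
def normalize_to_grid(points, pitch):
    """Snap each (x, y, z) point to the nearest grid intersection in XY;
    points landing on the same cell are merged by averaging their Z.
    Returns a sparse grid as {(ix, iy): z}; empty cells are left empty."""
    cells = {}
    for x, y, z in points:
        cells.setdefault((round(x / pitch), round(y / pitch)), []).append(z)
    return {k: sum(v) / len(v) for k, v in cells.items()}

def serpentine_path(cells, pitch):
    """Read the normalized grid row by row, reversing direction on every
    second row (back-and-forth scan); missing cells are simply skipped,
    so the path advances to the next occupied intersection."""
    path = []
    for i, iy in enumerate(sorted({iy for _, iy in cells})):
        cols = sorted(ix for ix, jy in cells if jy == iy)
        for ix in (reversed(cols) if i % 2 else cols):
            path.append((ix * pitch, iy * pitch, cells[(ix, iy)]))
    return path

cells = normalize_to_grid(
    [(0.02, 0.0, 1.0), (0.98, 0.0, 2.0), (0.0, 1.0, 3.0), (1.0, 1.0, 4.0)],
    pitch=1.0)
print(serpentine_path(cells, pitch=1.0))
# -> [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (1.0, 1.0, 4.0), (0.0, 1.0, 3.0)]
```

The returned triplets are exactly the "movement instructions" of the text: coordinates through which the tool tip must pass, with no surface fitting in between.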
In the case where the point cloud 40 is already gridded, the normalization carried out by the direct post-processor can simply scale the point cloud to the desired resolution. Owing to the large amount of data to be processed, parallel processing can be used. The post-processor can also select a subset of the points constituting the point cloud 40, for example to reduce its resolution to a manageable level, for instance by taking every 10th data point in order to reduce the resolution by a factor of 10. Compared with the conventional approach of defining surfaces and calculating the tool movements required to form these surfaces with a given tool shape, the direct post-processor approach is best used with a tool whose tip is small enough that machining errors resulting from the shape of the finishing tool 42 are minimized; such a tool is selected in step 33. This principle is illustrated in figs. 8a and 8b, which show the machining paths 43 formed by the normalized grid points 40c. Fig. 8a illustrates the resulting profile 43c cut with a finishing tool 42 of too large a diameter. As can clearly be seen, this profile 43c differs significantly from the path 43 determined by the points of the first set. In particular, the large diameter of the tip of the tool 42 causes collisions with the steepest sides of the relief, and prevents the machining of narrow holes or of right angles at the foot of vertical cliffs, for example. The tip of the finishing tool must therefore be fine enough to allow machining by passing directly over the points of the first set, without the radius offsets usually calculated by CAM programs. Fig. 8b illustrates a finishing tool 42 whose tip diameter is small enough that the finished profile lies within an acceptable tolerance of the machining path 43. 
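The subset selection mentioned above (taking every 10th point) is, in its simplest form, plain slicing. An illustrative sketch; a real post-processor might decimate in both grid axes rather than along the flat point list.

```python
def downsample(points, factor=10):
    """Keep every `factor`-th point of the cloud to reduce its
    resolution to a manageable level, as the text suggests."""
    return points[::factor]

print(len(downsample(list(range(100)), 10)))   # -> 10
```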
A tool that is too thin, however, risks breaking frequently; moreover, if the curvature of the tool tip is too pronounced, the machined grooves form visible ridges and ribs on the surface. It is therefore advantageous to use a finishing tool whose tip diameter is greater than the pitch between two points of the first set of points, so that the grooves overlap and no rib is left between two grooves. Optimal results have been obtained with finishing tool tip diameters of between 2 and 10 times the pitch between the points of the first set of points. In practice, for a very large model 9 several meters wide, a finishing tool whose tip has a diameter of 1 mm may be sufficient. For smaller renderings, a finishing tool tip diameter of 0.05 mm or less has been found necessary to provide a level of detail sufficient in practice for the image manipulation required in the multimedia electronic device 3 as described above. In addition, the level of resolution, and therefore of detail, provided by a tool of these dimensions is sufficient for a useful degree of zoom to be usable on the camera of the multimedia electronic device. In principle, the smaller the tip of the finishing tool, the greater the precision and resolution of the model 9, but the longer the machine time required, the more complex the management of tool wear, and the greater the grid resolution required. In step 33, an appropriate finishing tool is chosen for this purpose. The path 43 of the finishing tool is also illustrated in a three-dimensional view in fig. 10c. The finishing tool 42 is illustrated in figs. 9a and 9b as a conical tool; other tool shapes (cylindrical, spherical, toric, etc.) are of course also possible. Advantageously, the finishing tool 42 is mounted on a 6-axis robot in order to improve the precision and the resolution of the model 9. 
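The tip-diameter rule above (between 2 and 10 times the grid pitch) can be expressed as a simple check. This helper is illustrative only, not part of the patent.

```python
def finishing_tip_ok(tip_diameter, pitch):
    """Heuristic from the text: a tip between 2x and 10x the grid pitch
    lets adjacent grooves overlap (no ribs left between passes) while
    remaining fine enough to follow the path directly, without the
    radius offsets usually computed by CAM programs."""
    return 2 * pitch <= tip_diameter <= 10 * pitch

print(finishing_tip_ok(0.05, 0.01))   # 5x the pitch -> True
print(finishing_tip_ok(1.00, 0.01))   # far too large -> False
```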
In this way, the angle of the finishing tool 42 can be adapted such that its axis is always perpendicular to the machining path 43. Compared with conventional CNC machining based on the combination of standard shapes and standard tool movements (arcs, straight lines, grooves and so on), the present invention offers the possibility of rendering many more details and therefore a more precise reproduction of the form on which the model 9 is based. In step 34, a roughing tool is chosen so as to efficiently make a first cut in the substrate, in order to prevent the finishing tool from having to cut too deep and remove too much material, which would cause excessive tool wear, excessive chip formation and high stresses on the finishing tool 42, with an attendant risk of breaking the tool. However, it is not essential that a rough cut be made. On the basis of the path 43 of the finishing tool and the shape of the finishing tool 42, a path 46 of the roughing tool (see fig. 10a) can be calculated in step 35. The roughing tool 44 has a larger cutting diameter than that of the finishing tool 42, and serves to remove enough of the excess material from the substrate 45 in which the model (or its mold) 9 is machined so as to leave sufficient material for a high-quality finishing cut, but not so much as to strain the finishing tool 42. Subsequently, a substrate 45 is provided in a CNC machine tool in step 36, and is then roughed out (step 37, optional) and subjected to finishing operations (step 38) using the tools 42, 44 and their respective tool paths 43, 46. Fig. 10b illustrates a substrate 45 after roughing, and fig. 10c illustrates the substrate after the finishing operations along the path 43 of the finishing tool, so as to create a model 9 (or its mold in the case of a molded model 9). 
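One simple way to derive a roughing path 46 from the finishing path 43, in the spirit of the passage above, is to leave a uniform stock allowance above the finished surface. This is an illustrative sketch; the allowance value and the function name are assumptions, and a real roughing path would also use a coarser stepover matched to the larger tool 44.

```python
def roughing_path(finishing_path, allowance):
    """Raise every Z of the finishing path by a stock allowance, so the
    roughing tool removes the bulk of the material while leaving a thin,
    even skin for the finishing tool 42 to cut."""
    return [(x, y, z + allowance) for x, y, z in finishing_path]

print(roughing_path([(0.0, 0.0, 1.0), (1.0, 0.0, 2.0)], allowance=0.5))
# -> [(0.0, 0.0, 1.5), (1.0, 0.0, 2.5)]
```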
Although the use of the direct post-processor described above can lead to the production of a large number of redundant points (for example, the surface of a lake could be described as a flat surface and therefore defined more efficiently with conventional CNC operations), it ensures that any irregular surface can be described and thus machined with precision. Advantageously, with reference to fig. 13, the point cloud representing points on the surface of the object, measured by laser 54, scanning electron microscope, confocal microscope or any other suitable device well known to those skilled in the art, is continuously sent to and processed by the direct post-processor described above, and the processed point cloud is sent continuously to the CNC machine 55, which continuously converts it into instructions for moving the tip of the finishing tool 42 along the axes. The system according to the invention thus operates continuously, reducing the processing time. Optionally, with reference to fig. 14, the system 1 according to the invention further comprises a light source 56, such as a laser, illuminating the model 9, which comprises structures generating a holographic or diffractive effect. For example, the model 9 comprises surface diffraction gratings providing at least one diffracted image when the model 9 is lit by the laser source 56. Holographic reflection grating microstructures and diffractive-effect-generating structures are well known to those skilled in the art; such microstructures are described in particular in documents WO 93/18 419, WO 95/04 948, WO 95/02 200, US 4,761,253 and EP 0 105,099. It is understood that the present invention is in no way limited to the embodiments described above and that many modifications can be made thereto without departing from the scope of the appended claims.
Claims
1. Method for displaying cartographic data, comprising the following steps: - capture, using a multimedia electronic device (3) comprising a camera (5), of video image data representing a model of a geographic relief; - identification of the relief, by comparison with at least one geographic relief reference model, so as to determine whether the captured image corresponds to a known model, and to which portion of this model it corresponds; - identification of features in said image data; - access to additional data relating to the identified relief; - alignment of the captured image data with the additional data, based on said features; - display, on a display device (7) of said device, of an image obtained by superimposing said additional data in real time on said image data.
2. Method according to claim 1, said additional data comprising a texture, the method comprising a step of projecting said texture onto said image data.
3. Method according to one of the preceding claims, said additional data comprising a route.
4. Method according to one of the preceding claims, said additional data comprising map information and/or points of interest.
5. Method according to one of the preceding claims, said additional data comprising meteorological information.
6. Method according to one of the preceding claims, said additional data comprising moving images.
7. Method according to one of the preceding claims, said additional data comprising two-dimensional images.
8. Method according to one of the preceding claims, said additional data comprising three-dimensional images.
9. Method according to one of the preceding claims, comprising a step of projecting said additional data onto a plane perpendicular to the direction of shooting.
10. Method according to one of the preceding claims, comprising a step of projecting said additional data onto said relief seen from the point of view of the camera (5). 
11. Method according to one of the preceding claims, further comprising the provision of a model (9), said model being manufactured from a substrate (45) by a method comprising a machining step in which the movements of a cutting tool (42) having a tip are expressed in the form of a tool path (43) along perpendicular axes X, Y and Z on a grid in an XY plane calculated on the basis of a point cloud (40) generated by the measurement of a three-dimensional object, said point cloud (40) comprising a plurality of points (40a) which are normalized into a grid of normalized points through which the tip of the cutting tool (42) must pass, said tool path (43) being generated by following said grid of normalized points (40c) sequentially.
12. Method according to any one of the preceding claims, in which said step of identifying the relief (9) is carried out by at least one of the following steps: - imaging and identification of a code provided on or adjacent to the model (9); - communication with a wireless identification device (53) provided in or near the model; - determination of a geographic location of the multimedia electronic device (3) and comparison of this geographic location with a database comprising a register of the geographic locations of said models.
13. Method according to any one of the preceding claims, further comprising a step of determining whether said model (9) is authorized by means of at least one of the following steps: - analysis of landmarks (51, 52) provided on or adjacent to said model (9); - communication with a wireless identification device (53) provided in or near the model (9); - detection and analysis of a deliberate error (50) provided on the model (9).
14. 
Product comprising a computer-readable medium, and a computer program product comprising computer-executable instructions on the computer-readable medium for causing an electronic multimedia device (3) comprising a processor, a camera (5) and a display (7) to carry out the method comprising the steps of one of the preceding claims.
[Figures: OCR residue removed. Recoverable flowchart labels: "Provision of a relief rendering and of a portable electronic device provided with a computer program product"; "2.1 Capture of the relief rendering".]